Linear Algebra
I Background in Linear Algebra
In this section we state some elementary results that we will use in our main proofs, and we specialize the definitions to the case of Gaussian matrices. The next lemma is part of the proof of [44, Lemma 4.2]; we state it here as a separate result to keep the longer proofs that follow shorter. Lemma 7. Let n ≥ 1 be an integer, and let δ ∈ (0, 1/2).
We thank all the reviewers for their time and for their thoughtful comments. We will incorporate the suggested references and add a "broader impact" section. (We apologize for not realizing that we are required to include one.)
Is grammar-compression useful for vectors and matrices encountered in ML? (a concern raised by reviewer #3). We prove that grammar compressions are harder to analyze (without decompression) than simpler schemes such as RLE; in the submission, we commented on this question only in passing. Fortunately, such a test was already performed in the "Compressed Linear Algebra" paper [...]. To quote reviewer #4: "In this sense, proving a limitation of those models will greatly influence future research." The ethical consequences depend on the specific application. We wish to sincerely thank the reviewers and the PC again for their time and help in improving the quality of this work.
spd-metrics-id: A Python Package for SPD-Aware Distance Metrics in Connectome Fingerprinting and Beyond
We present spd-metrics-id, a Python package for computing distances and divergences between symmetric positive-definite (SPD) matrices. Unlike traditional toolkits that focus on specific applications, spd-metrics-id provides a unified, extensible, and reproducible framework for SPD distance computation. The package supports a wide variety of geometry-aware metrics, including Alpha-z Bures-Wasserstein, Alpha-Procrustes, affine-invariant Riemannian, log-Euclidean, and others, and is accessible both via a command-line interface and a Python API. Reproducibility is ensured through Docker images and Zenodo archiving. We illustrate usage through a connectome fingerprinting example, but the package is broadly applicable to covariance analysis, diffusion tensor imaging, and other domains requiring SPD matrix comparison. The package is openly available at https://pypi.org/project/spd-metrics-id/.
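The package's own API is not reproduced here; as a minimal illustration of two of the metrics listed above, the log-Euclidean and affine-invariant Riemannian distances can be computed from eigendecompositions with plain NumPy (the function names below are ours, not the package's):

```python
import numpy as np

def spd_logm(M):
    """Matrix logarithm of an SPD matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.log(w)) @ V.T  # V diag(log w) V^T

def log_euclidean_distance(A, B):
    """d_LE(A, B) = || log(A) - log(B) ||_F."""
    return np.linalg.norm(spd_logm(A) - spd_logm(B), ord="fro")

def affine_invariant_distance(A, B):
    """d_AI(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F."""
    w, V = np.linalg.eigh(A)
    A_inv_sqrt = (V / np.sqrt(w)) @ V.T  # V diag(w^{-1/2}) V^T
    return np.linalg.norm(spd_logm(A_inv_sqrt @ B @ A_inv_sqrt), ord="fro")

A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.eye(2)
d_le = log_euclidean_distance(A, B)
d_ai = affine_invariant_distance(A, B)
```

Both distances are symmetric and vanish iff the two matrices coincide; the affine-invariant one is additionally invariant under congruence transformations, which is why it is often preferred for covariance and connectome comparisons.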
CoLA: Exploiting Compositional Structure for Automatic and Efficient Numerical Linear Algebra
Moreover, CoLA provides memory-efficient automatic differentiation, low-precision computation, and GPU acceleration in both JAX and PyTorch, while also accommodating new objects, operations, and rules in downstream packages via multiple dispatch. CoLA can accelerate many algebraic operations while making it easy to prototype matrix structures and algorithms, providing an appealing drop-in tool for virtually any computational effort that requires linear algebra. We showcase its efficacy across a broad range of applications, including partial differential equations, Gaussian processes, equivariant model construction, and unsupervised learning.
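As a rough sketch of the compositional idea (this is not CoLA's actual API), a composed operator can dispatch matrix-vector products and solves to its structured factors instead of materializing a dense product matrix:

```python
import numpy as np

class Diagonal:
    """Structured operator that stores only its diagonal."""
    def __init__(self, d):
        self.d = np.asarray(d)
    def matvec(self, x):
        return self.d * x   # O(n) instead of O(n^2)
    def solve(self, b):
        return b / self.d   # O(n) instead of O(n^3)

class Composed:
    """Lazy product A @ B: applies factors right-to-left, never forms the product."""
    def __init__(self, A, B):
        self.A, self.B = A, B
    def matvec(self, x):
        return self.A.matvec(self.B.matvec(x))
    def solve(self, b):
        # (A B)^{-1} b = B^{-1} (A^{-1} b): delegate to each factor's own solve
        return self.B.solve(self.A.solve(b))

D1 = Diagonal([1.0, 2.0, 4.0])
D2 = Diagonal([2.0, 2.0, 2.0])
P = Composed(D1, D2)
x = np.array([1.0, 1.0, 1.0])
y = P.matvec(x)       # [2, 4, 8]
x_back = P.solve(y)   # recovers x with no dense factorization
```

A dispatch-based library generalizes this pattern: each structure (diagonal, low-rank, Kronecker, sum, product, ...) registers its own fast rules, and compositions inherit them automatically.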
Analysis of Diagnostics (Part II): Prevalence, Linear Independence, and Unsupervised Learning
Patrone, Paul N., Binder, Raquel A., Forconi, Catherine S., Moormann, Ann M., Kearsley, Anthony J.
This is the second manuscript in a two-part series that uses diagnostic testing to understand the connection between prevalence (i.e. number of elements in a class), uncertainty quantification (UQ), and classification theory. Part I considered the context of supervised machine learning (ML) and established a duality between prevalence and the concept of relative conditional probability. The key idea of that analysis was to train a family of discriminative classifiers by minimizing a sum of prevalence-weighted empirical risk functions. The resulting outputs can be interpreted as relative probability level-sets, which thereby yield uncertainty estimates in the class labels. This procedure also demonstrated that certain discriminative and generative ML models are equivalent. Part II considers the extent to which these results can be extended to tasks in unsupervised learning through recourse to ideas in linear algebra. We first observe that the distribution of an impure population, for which the class of a corresponding sample is unknown, can be parameterized in terms of a prevalence. This motivates us to introduce the concept of linearly independent populations, which have different but unknown prevalence values. Using this, we identify an isomorphism between classifiers defined in terms of impure and pure populations. In certain cases, this also leads to a nonlinear system of equations whose solution yields the prevalence values of the linearly independent populations, fully realizing unsupervised learning as a generalization of supervised learning. We illustrate our methods in the context of synthetic data and a research-use-only SARS-CoV-2 enzyme-linked immunosorbent assay (ELISA).
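The paper's nonlinear system is not reproduced here; as a toy illustration of the underlying idea that an impure population is a prevalence-weighted combination of pure ones, the prevalence of a two-component mixture is linear in the first moment and can be recovered from samples (all distributions and values below are synthetic assumptions, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "pure" populations, e.g. assay readouts for the two classes.
mu_neg, mu_pos = 0.0, 3.0
neg = rng.normal(mu_neg, 1.0, 100_000)
pos = rng.normal(mu_pos, 1.0, 100_000)

# Impure population: a prevalence-weighted mixture with p unknown to the analyst.
p_true = 0.3
n = 200_000
is_pos = rng.random(n) < p_true
impure = np.where(is_pos, rng.normal(mu_pos, 1.0, n),
                          rng.normal(mu_neg, 1.0, n))

# Moment matching: E[impure] = p*E[pos] + (1-p)*E[neg] is linear in p.
p_hat = (impure.mean() - neg.mean()) / (pos.mean() - neg.mean())
```

With more than two linearly independent populations the same matching yields a system of equations in the prevalence vector, which is the setting the abstract describes.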